Dimensionality reduction and classification using the distribution mapping exponent

Author

  • Marcel Jirina
Abstract

A probability distribution mapping function, which maps a multivariate data distribution to a function of one variable, is introduced. The distribution mapping exponent (DME) acts as a kind of effective dimensionality of multidimensional space. The method for classification of multivariate data is based on a local estimate of the distribution mapping exponent for each point. The distances of all points of a given class of the training set from a given (unknown) point are found, and it is shown that the sum of reciprocals of the DME-th power of these distances can be used as a probability density estimate. The classification quality was tested and compared with other methods using multivariate data from the UCI Machine Learning Repository. The method has no tuning parameters.

Introduction

In this paper we deal with distances in multidimensional space and try to simplify the complex picture of the probability distribution of points in this space by introducing mapping functions of one variable. This variable is the distance from a given point (the query point x [3]) in multidimensional space. It follows that the mapping functions differ for different query points; this is the price we pay for the simplification from n variables in n-dimensional space to one variable. We will show that this cost is not very high, at least in the application presented here. The proposed method is based on the distances of the training set samples xs, s = 1, 2, ..., k from point x, similarly to methods based on nearest neighbors [1][5]. It is shown here that the sum of reciprocals of the q-th power of these distances, where q is a suitable number, is convergent and can be used as a probability density estimate. It will be seen that convergence is faster the higher the dimensionality and the larger q is.
The method resembles the Parzen window approach [4], [5], but the problem with a direct application of that approach is that the step size does not satisfy a necessary convergence condition. Because of the exponential nature of the estimate based on q, q is very close to the intrinsic dimension of the data, i.e. the correlation dimension [9], and to its estimate by the Grassberger-Procaccia algorithm [10][11]. The essential difference is that q is understood locally, i.e. for each point x separately, whereas the correlation dimension is a property of the whole data space. It will be seen that, although q has a different objective here, the algorithm is in fact a simplified version of the Grassberger-Procaccia algorithm. Throughout this paper we assume standardized data, i.e. the individual coordinates of the samples of the learning set are standardized to zero mean and unit variance, and the same standardization constants (empirical mean and standard deviation) are used throughout.

ESANN'2004 proceedings, European Symposium on Artificial Neural Networks, Bruges (Belgium), 28-30 April 2004, d-side publi., ISBN 2-930307-04-8, pp. 169-174.
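A minimal sketch of the idea described above: for each query point, a local exponent q is estimated from a log-log fit of neighbor rank against distance (a simplified, local variant of the Grassberger-Procaccia slope; the paper's exact estimator may differ), and each class is then scored by the sum of reciprocal q-th powers of the distances. Function names and the fitting choice here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def local_dme(dists, eps=1e-12):
    """Estimate a local distribution mapping exponent q at a query point
    from neighbor distances, via a least-squares fit of log(rank) against
    log(distance) -- an illustrative, local Grassberger-Procaccia-style
    slope estimate (an assumption, not the paper's exact estimator)."""
    d = np.sort(dists)
    d = d[d > eps]                      # drop zero distances before taking logs
    ranks = np.arange(1, len(d) + 1)
    # slope of the log-log fit ~ effective local dimensionality
    q, _ = np.polyfit(np.log(d), np.log(ranks), 1)
    return max(q, 1.0)

def dme_classify(X_train, y_train, x, eps=1e-12):
    """Classify x by comparing, per class, the sum of reciprocals of the
    q-th power of the distances from x to that class's training points."""
    scores = {}
    for c in np.unique(y_train):
        d = np.linalg.norm(X_train[y_train == c] - x, axis=1)
        q = local_dme(d, eps)
        scores[c] = np.sum(1.0 / np.maximum(d, eps) ** q)
    return max(scores, key=scores.get)
```

Note that q is recomputed per class and per query point, reflecting the paper's point that the mapping function is local, and there are no tuning parameters beyond numerical safeguards.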


Similar articles

Diagnosis of Diabetes Using an Intelligent Approach Based on Bi-Level Dimensionality Reduction and Classification Algorithms

Objective: Diabetes is one of the most common metabolic diseases. Early diagnosis of diabetes and treatment of hyperglycemia and related metabolic abnormalities is of vital importance. Diagnosis of diabetes via proper interpretation of diabetes data is an important classification problem. Classification systems help clinicians predict the risk factors that cause diabetes or pre...


2D Dimensionality Reduction Methods without Loss

In this paper, several two-dimensional extensions of principal component analysis (PCA) and linear discriminant analysis (LDA) techniques have been applied in a lossless dimensionality reduction framework for a face recognition application. In this framework, the benefits of dimensionality reduction were used to improve the performance of its predictive model, which was a support vector machine (...


Distribution Mapping Exponent for Multivariate Data Classification

The distribution-mapping exponent (DME), which acts as a kind of effective dimensionality of multidimensional space, is introduced. The method for classification of multivariate data is based on a local estimate of the distribution mapping exponent for each point. The distances of all points of a given class of the training set from a given (unknown) point are found, and it is shown that the sum of reciprocal...


Dimensionality Reduction and Improving the Performance of Automatic Modulation Classification using Genetic Programming (RESEARCH NOTE)

This paper shows how we can take advantage of genetic programming in the selection of suitable features for automatic modulation recognition. Automatic modulation recognition is one of the essential components of modern receivers. In this regard, the selection of suitable features may significantly affect the performance of the process. Simulations were conducted with 5 dB and 10 dB SNRs. Test and ...


Performing a Preprocessing Step Before Feature Extraction in the Classification of Hyperspectral Image Data

Hyperspectral data potentially contain more information than multispectral data because of their higher spectral resolution. However, the stochastic data analysis approaches that have been successfully applied to multispectral data are not as effective for hyperspectral data. Various investigations indicate that the key problem that causes poor performance in the stochastic approaches t...




Publication date: 2004